Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of the medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical, and industrial teams around the world, pursuing applications that span nearly every aspect of healthcare.
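As an illustration of the compositional, PyTorch-style design described above, a minimal sketch of a MONAI preprocessing and model pipeline might look as follows. The transform and network names follow MONAI's public API; the specific parameter values and the file path are illustrative assumptions rather than recommendations.

```python
import torch
from monai.transforms import Compose, LoadImaged, EnsureChannelFirstd, ScaleIntensityd, ToTensord
from monai.networks.nets import UNet

# Dictionary-based transforms compose like torchvision transforms,
# but are aware of medical image particularities (spacing, orientation, metadata).
preprocess = Compose([
    LoadImaged(keys=["image"]),            # reads e.g. NIfTI/DICOM volumes
    EnsureChannelFirstd(keys=["image"]),   # (H, W, D) -> (C, H, W, D)
    ScaleIntensityd(keys=["image"]),       # intensity normalization
    ToTensord(keys=["image"]),
])

# A purpose-built 3D segmentation network; channel/stride settings are illustrative.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)

sample = preprocess({"image": "example_volume.nii.gz"})  # hypothetical file path
with torch.no_grad():
    logits = model(sample["image"].unsqueeze(0))         # add a batch dimension
```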
Transformer-based models, capable of learning better global dependencies, have recently demonstrated exceptional representation learning capabilities in computer vision and medical image analysis. Transformers reformat the image into separate patches and realize global communication via the self-attention mechanism. However, positional information between patches is hard to preserve in such 1D sequences, and losing it can lead to sub-optimal performance when dealing with large amounts of heterogeneous tissues of various sizes in 3D medical image segmentation. Additionally, current methods are not robust and efficient for heavy-duty medical segmentation tasks such as predicting a large number of tissue classes or modeling globally inter-connected tissue structures. Inspired by the nested hierarchical structures in vision transformers, we propose a novel 3D medical image segmentation method (UNesT) that employs a simplified and faster-converging transformer encoder design, achieving local communication among spatially adjacent patch sequences by aggregating them hierarchically. We extensively validate our method on multiple challenging datasets, covering 133 anatomical structures in the brain, 14 organs in the abdomen, 4 hierarchical components in the kidney, and inter-connected kidney tumors. We show that UNesT consistently achieves state-of-the-art performance and evaluate its generalizability and data efficiency. In particular, the model performs whole brain segmentation over the complete ROI of 133 tissue classes with a single network, outperforming the prior state-of-the-art method SLANT27, which ensembles 27 network tiles; it increases the mean DSC on the publicly available Colin and CANDI datasets from 0.7264 to 0.7444 and from 0.6968 to 0.7025, respectively.
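A rough PyTorch sketch of the hierarchical aggregation idea (local attention within spatially adjacent patch blocks, then merging neighboring blocks so attention spans larger regions) is shown below. This is a simplified illustration, not the exact UNesT architecture; the class, block ordering, and merge factor are assumptions.

```python
import torch
import torch.nn as nn

class LocalBlockAttention(nn.Module):
    """Self-attention applied independently within each local block of patch tokens;
    a rough sketch of hierarchical 'local communication', not the exact UNesT design."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                      # x: (batch * num_blocks, tokens_per_block, dim)
        h = self.norm(x)
        out, _ = self.attn(h, h, h)
        return x + out

def aggregate_blocks(x, group=8):
    """Merge each group of `group` consecutive blocks into one larger block (e.g. 2x2x2
    spatial neighbors), assuming blocks are ordered so consecutive groups are adjacent."""
    bn, tokens, dim = x.shape
    return x.reshape(bn // group, group * tokens, dim)

# Toy usage: 2 volumes, each split into 64 blocks of 27 patch tokens with 96-dim features.
x = torch.randn(2 * 64, 27, 96)
x = LocalBlockAttention(96)(x)        # communication within each small block
x = aggregate_blocks(x)               # (16, 216, 96): 8 adjacent blocks merged
x = LocalBlockAttention(96)(x)        # attention now spans the aggregated regions
```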
Features learned from a single radiologic image cannot provide information about whether and how much a lesion may change over time. Time-dependent features computed from repeated images can capture those changes and help identify malignant lesions by their temporal behavior. However, longitudinal medical imaging presents the unique challenge of sparse, irregular time intervals in data acquisition. While self-attention has been shown to be a versatile and effective learning mechanism for time series and natural images, its potential for interpreting temporal distance between sparse, irregularly sampled spatial features has not been explored. In this work, we propose two interpretations of a time-distance vision transformer (ViT), using (1) vector embeddings of continuous time and (2) a temporal emphasis model that scales the self-attention weights. The two algorithms are evaluated on benign-versus-malignant lung cancer discrimination using synthetic pulmonary nodules and lung screening computed tomography studies from the National Lung Screening Trial (NLST). Experiments evaluating the time-distance ViTs on synthetic nodules show a fundamental improvement over standard ViTs in classifying irregularly sampled longitudinal images. In cross-validation on screening chest CTs from the NLST, our methods (0.785 and 0.786 AUC, respectively) significantly outperform a cross-sectional approach (0.734 AUC) and match the discriminative performance of a leading longitudinal medical imaging algorithm (0.779 AUC) on benign-versus-malignant classification. This work represents the first self-attention-based framework for classifying longitudinal medical images. Our code is available at https://github.com/tom1193/time-distance-transformer.
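The two mechanisms can be illustrated with a minimal PyTorch sketch. The sinusoidal form of the continuous-time embedding and the exponential decay used for temporal emphasis are assumptions made for illustration, not the authors' implementation (see the linked repository for the actual code).

```python
import math
import torch

def continuous_time_embedding(t, dim=64):
    """Sinusoidal embedding of continuous acquisition times (e.g. days), analogous to
    positional encodings but indexed by real-valued time rather than sequence order."""
    # t: (batch, num_scans) float tensor of acquisition times
    freqs = torch.exp(torch.arange(0, dim, 2, dtype=torch.float32) * (-math.log(10000.0) / dim))
    angles = t.unsqueeze(-1) * freqs                                 # (batch, num_scans, dim/2)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)  # (batch, num_scans, dim)

def temporally_emphasized_attention(q, k, v, t, alpha=0.01):
    """Scaled dot-product attention whose weights are down-weighted by temporal distance.
    The decay exp(-alpha * |t_i - t_j|) is an assumed emphasis function for illustration."""
    # q, k, v: (batch, num_scans, dim); t: (batch, num_scans)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))         # (batch, n, n)
    dt = (t.unsqueeze(-1) - t.unsqueeze(-2)).abs()                    # pairwise time gaps
    weights = torch.softmax(scores, dim=-1) * torch.exp(-alpha * dt)
    weights = weights / weights.sum(dim=-1, keepdim=True)             # renormalize rows
    return weights @ v
```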
For humans, understanding the relationships between objects from visual signals is intuitive. For artificial intelligence, however, this task remains challenging. Researchers have made significant progress in semantic relationship detection, such as human-object interaction detection and visual relationship detection. We take the study of visual relationships a step further, from semantic to geometric: concretely, we predict relative occlusion and relative distance relationships. Detecting these relationships from a single image is challenging, and enforcing focused attention on task-specific regions plays a key role in detecting them successfully. In this work, (1) we propose a novel three-part architecture as the infrastructure for focused attention; (2) we use a generalized intersection box prediction task to effectively guide our model to focus on occlusion-specific regions; (3) our model achieves new state-of-the-art performance on distance-aware relationship detection. Specifically, our model improves the F1 score from 33.8% to 38.6% and improves the occlusion F1 score from 34.4% to 41.2%. Our code is publicly available.
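The paper's generalized intersection box is not defined in this abstract; as a rough point of reference, a plain intersection box between two axis-aligned boxes, which marks the region where occlusion can occur, can be computed as in the sketch below. The generalized variant is assumed to extend this to non-overlapping box pairs.

```python
def intersection_box(box_a, box_b):
    """Axis-aligned intersection of two boxes given as (x1, y1, x2, y2).
    Returns None when the boxes do not overlap. This is the plain intersection box;
    the paper's *generalized* version is assumed to handle non-overlapping pairs too."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    if x2 <= x1 or y2 <= y1:
        return None
    return (x1, y1, x2, y2)

# Two overlapping objects: the intersection box marks the likely occlusion region.
print(intersection_box((10, 10, 50, 50), (30, 30, 80, 80)))  # (30, 30, 50, 50)
```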
Although deep learning prediction models have been successful at discriminating between different classes, they often suffer from poor calibration across challenging domains, including healthcare. Moreover, long-tailed distributions pose great challenges for deep learning classification problems, including clinical disease prediction. Several methods have recently been proposed to calibrate deep predictions in computer vision, but there has been no study of how representative models perform across different challenging settings. In this paper, we bridge confidence calibration from computer vision to medical imaging with a comparative study of four high-impact calibration models. Our study is conducted under different settings (natural image classification and lung cancer risk estimation), including balanced versus imbalanced training sets and computer vision versus medical imaging. Our results support key findings: (1) we obtain new conclusions that had not been studied under different learning settings, e.g., combining two calibration models that each mitigate overconfident predictions leads to under-confident predictions, and simpler calibration models from the computer vision domain tend to transfer more readily to medical imaging; (2) we highlight the gap between general computer vision tasks and medical imaging prediction, e.g., calibration methods that are ideal for general computer vision tasks may in fact damage the calibration of medical imaging predictions; (3) we also reinforce previous conclusions from the natural image classification setting. We believe the merits of this study can guide readers in choosing calibration models and in understanding the gap between the general computer vision and medical imaging domains.
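The four calibration models compared in the study are not restated in this abstract. As background, a minimal sketch of a standard calibration measure (expected calibration error) and of one simple post-hoc calibrator (temperature scaling) is given below; both are well-established techniques, and the code is illustrative rather than the study's implementation.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: average |accuracy - confidence| over equal-width confidence bins,
    weighted by the fraction of samples falling in each bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

def temperature_scale(logits, temperature):
    """Post-hoc temperature scaling: divide logits by a scalar T (fit on a validation set
    by minimizing NLL) before the softmax; T > 1 softens overconfident predictions."""
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(z)
    return probs / probs.sum(axis=1, keepdims=True)
```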
Nominal metaphors are frequently used in human language and have been shown to persuade, express emotion, and stimulate interest. This paper addresses the problem of Chinese nominal metaphor (NM) generation. We introduce a novel multi-task framework that jointly optimizes three tasks: NM identification, NM component identification, and NM generation. The metaphor identification module is able to perform a self-training procedure that discovers novel metaphors from a large unlabeled corpus for NM generation. The NM component identification module emphasizes the NM components during training and conditions the generation on these components to obtain more coherent results. To train the NM identification and component identification modules, we construct an annotated corpus consisting of 6.3k sentences containing diverse metaphorical patterns. Automatic metrics show that our method can generate diverse metaphors with good readability, of which 92% are novel metaphorical comparisons. Human evaluation shows that our model significantly outperforms baselines in consistency and creativity.
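The self-training procedure mentioned above follows the usual pattern of pseudo-labeling an unlabeled corpus and retraining on the confident predictions. A generic sketch is given below; the interfaces (`fit`, `predict`), the confidence threshold, and the number of rounds are hypothetical, not the paper's settings.

```python
def self_train(identifier, labeled, unlabeled, threshold=0.95, rounds=3):
    """Generic self-training loop of the kind the identification module could use:
    fit on labeled sentences, pseudo-label the unlabeled corpus, keep confident
    predictions as newly mined metaphors, and refit on the enlarged set."""
    data = list(labeled)
    for _ in range(rounds):
        identifier.fit(data)                         # assumed training interface
        confident = []
        for sentence in unlabeled:
            label, confidence = identifier.predict(sentence)
            if confidence >= threshold:
                confident.append((sentence, label))  # newly discovered metaphors
        data = list(labeled) + confident
    return identifier
```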
Vision Transformers (ViTs) have shown remarkable performance in self-supervised learning of global and local representations that can be transferred to downstream applications. Inspired by these results, we introduce a novel self-supervised learning framework with tailored proxy tasks for medical image analysis. Specifically, we propose: (i) a new 3D transformer-based model, dubbed Swin UNEt TRansformers (Swin UNETR), with a hierarchical encoder for self-supervised pre-training; (ii) tailored proxy tasks for learning the underlying patterns of human anatomy. We demonstrate successful pre-training of the proposed model on 5,050 publicly available computed tomography (CT) images from various body organs. The effectiveness of our approach is validated by fine-tuning the pre-trained models on the Beyond the Cranial Vault (BTCV) segmentation challenge and on segmentation tasks from the Medical Segmentation Decathlon (MSD) dataset. Our model is currently state-of-the-art (i.e., ranked 1st) on the public test leaderboards of both the MSD and BTCV datasets. Code: https://monai.io/research/swin-unetr.
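A minimal fine-tuning-style usage sketch of the Swin UNETR implementation linked above (available in MONAI as `monai.networks.nets.SwinUNETR`) might look as follows. Argument names can vary between MONAI versions, and the output-channel count, weight file, and input size are illustrative assumptions.

```python
import torch
from monai.networks.nets import SwinUNETR

# Instantiate the Swin UNETR segmentation network; values are illustrative
# (14 output channels would correspond to BTCV's 13 organs plus background).
model = SwinUNETR(
    img_size=(96, 96, 96),
    in_channels=1,
    out_channels=14,
    feature_size=48,
)

# A self-supervised pre-trained encoder would typically be loaded before fine-tuning, e.g.
# model.load_state_dict(torch.load("pretrained_weights.pt"), strict=False)  # hypothetical file

x = torch.randn(1, 1, 96, 96, 96)   # one single-channel CT patch
with torch.no_grad():
    logits = model(x)               # (1, 14, 96, 96, 96)
```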
Accurate and efficient point cloud registration is challenging because noise and the large number of points affect correspondence search. This challenge remains an open research problem, since most existing methods rely on correspondence search. To address it, we propose a new data-driven registration algorithm by investigating deep generative neural networks for point cloud registration. Given two point clouds, the motivation is to directly generate the aligned point cloud, which is useful in many applications such as 3D matching and search. We design an end-to-end generative neural network for aligned point cloud generation to realize this motivation, containing three novel components. First, a point multi-layer perceptron (MLP) mixer (PointMixer) network is proposed to efficiently maintain both global and local structural information within a point cloud. Second, a feature interaction module is proposed to fuse information across the two point clouds. Third, a parallel and differentiable sample consensus method is proposed to compute the transformation matrix of the input point clouds based on the generated registration results. The proposed generative neural network is trained within a GAN framework by maintaining data distribution and structural similarity. Experiments on the ModelNet40 and 7Scene datasets show that the proposed algorithm achieves state-of-the-art accuracy and efficiency. Notably, compared with state-of-the-art correspondence-based algorithms, our method reduces the registration error (CD) by about 2x while running about 12x faster.
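The registration error metric abbreviated CD above is the Chamfer Distance between point clouds. A minimal NumPy sketch of the symmetric form (not the authors' implementation) is shown below.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between two point clouds p (N, 3) and q (M, 3):
    mean squared distance from each point to its nearest neighbor in the other cloud."""
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)   # (N, M) pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Toy check: identical clouds give a distance of 0.
p = np.random.rand(128, 3)
print(chamfer_distance(p, p))  # 0.0
```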
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of these datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationships between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
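A minimal sketch of how class-name embeddings can be obtained from a pre-trained CLIP model (here the OpenAI `clip` package) is shown below. The prompt template, class list, and downstream use are illustrative assumptions; the Universal Model's exact prompts and conditioning mechanism are not reproduced here.

```python
import torch
import clip  # OpenAI CLIP package (pip install git+https://github.com/openai/CLIP.git)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Illustrative prompts for a few abdominal classes.
organ_names = ["liver", "spleen", "pancreas", "liver tumor"]
text = clip.tokenize([f"a computed tomography of a {name}" for name in organ_names]).to(device)

with torch.no_grad():
    text_embeddings = model.encode_text(text)   # (4, 512) fixed class embeddings

# Such text embeddings could then condition a segmentation decoder, so that adding a
# new class only requires a new prompt rather than a new output head.
```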
Classification using supervised learning requires annotating a large amount of class-balanced data for model training and testing. This has practically limited the scope of applications with supervised learning, in particular deep learning. To address the issues associated with limited and imbalanced data, this paper introduces a sample-efficient co-supervised learning paradigm (SEC-CGAN), in which a conditional generative adversarial network (CGAN) is trained alongside the classifier and supplements the annotated data with semantics-conditioned, confidence-aware synthesized examples during the training process. In this setting, the CGAN not only serves as a co-supervisor but also provides complementary quality examples to aid the classifier training in an end-to-end fashion. Experiments demonstrate that the proposed SEC-CGAN outperforms the external classifier GAN (EC-GAN) and a baseline ResNet-18 classifier; for the comparison, all classifiers in the above methods adopt the ResNet-18 architecture as the backbone. In particular, for the Street View House Numbers dataset, using 5% of the training data, SEC-CGAN achieves a test accuracy of 90.26%, as opposed to 88.59% for EC-GAN and 87.17% for the baseline classifier; for the highway image dataset, using 10% of the training data, SEC-CGAN achieves a test accuracy of 98.27%, compared to 97.84% for EC-GAN and 95.52% for the baseline classifier.
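A rough sketch of one classifier update in the spirit of this co-supervised setup is given below: the real annotated batch is supplemented with class-conditioned CGAN samples, filtered by confidence. The generator interface, confidence threshold, and loss weight are illustrative assumptions, not the paper's exact training procedure.

```python
import torch
import torch.nn.functional as F

def co_supervised_step(classifier, generator, images, labels, opt,
                       num_classes=10, latent_dim=100, conf_threshold=0.9, weight=0.5):
    """One classifier update: loss on the real annotated batch plus a weighted loss on
    confidence-filtered, class-conditioned synthesized examples (sketch only)."""
    loss = F.cross_entropy(classifier(images), labels)

    # Synthesize class-conditioned examples; the generator here is assumed to take
    # (noise, class index) — the actual conditioning interface depends on the CGAN.
    fake_labels = torch.randint(0, num_classes, (images.size(0),), device=images.device)
    z = torch.randn(images.size(0), latent_dim, device=images.device)
    fake_images = generator(z, fake_labels).detach()

    # Confidence-aware filtering: keep only synthesized samples to which the classifier
    # already assigns high probability for their conditioning class.
    with torch.no_grad():
        probs = F.softmax(classifier(fake_images), dim=1)
        conf = probs.gather(1, fake_labels.unsqueeze(1)).squeeze(1)
    mask = conf > conf_threshold
    if mask.any():
        loss = loss + weight * F.cross_entropy(classifier(fake_images[mask]), fake_labels[mask])

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```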